
    CLTs and asymptotic variance of time-sampled Markov chains

    For a Markov transition kernel P and a probability distribution μ on the nonnegative integers, a time-sampled Markov chain evolves according to the transition kernel $P_\mu = \sum_k \mu(k) P^k$. In this note we obtain CLT conditions for time-sampled Markov chains and derive a spectral formula for the asymptotic variance. Using these results we compare the efficiency of Barker's and Metropolis algorithms in terms of asymptotic variance.
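    As a concrete illustration of the construction above, the following minimal sketch (the function names and the Gaussian random-walk Metropolis base kernel are illustrative, not from the paper) simulates a time-sampled chain by drawing k from μ at each step and applying the base kernel P exactly k times, which is one way to realise the kernel $P_\mu = \sum_k \mu(k) P^k$.

```python
import numpy as np

def time_sampled_step(x, base_step, mu_sampler, rng):
    """One step of the time-sampled chain P_mu = sum_k mu(k) P^k.

    base_step(x, rng) draws one transition of the base kernel P;
    mu_sampler(rng) draws k from the distribution mu on {0, 1, 2, ...}.
    """
    k = mu_sampler(rng)
    for _ in range(k):
        x = base_step(x, rng)
    return x

# Illustrative base kernel: a Gaussian random-walk Metropolis step for a
# standard normal target; mu is a shifted geometric, so k can be 0, 1, 2, ...
def rwm_step(x, rng, scale=1.0):
    y = x + scale * rng.standard_normal()
    if np.log(rng.uniform()) < 0.5 * (x**2 - y**2):
        return y
    return x

rng = np.random.default_rng(0)
x = 0.0
samples = []
for _ in range(10_000):
    x = time_sampled_step(x, rwm_step, lambda r: r.geometric(0.5) - 1, rng)
    samples.append(x)
print(np.mean(samples), np.var(samples))
```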

    Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains

    A $\phi$-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace "almost all" by "all," which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis-Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms. Comment: Published at http://dx.doi.org/10.1214/105051606000000510 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
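    For readers unfamiliar with the algorithm the paper analyses, the following is a minimal, generic Metropolis-within-Gibbs sketch (deterministic scan, one-dimensional random-walk proposals, an illustrative bivariate normal target); it is not the paper's counter-example.

```python
import numpy as np

def log_target(x):
    # Illustrative target: a standard bivariate normal (not the paper's example).
    return -0.5 * np.sum(x**2)

def metropolis_within_gibbs(n_iter, scale=1.0, seed=0):
    """Deterministic-scan Metropolis-within-Gibbs: update each coordinate
    in turn with a one-dimensional random-walk Metropolis step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        for i in range(2):
            prop = x.copy()
            prop[i] += scale * rng.standard_normal()
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
        chain[t] = x
    return chain

chain = metropolis_within_gibbs(5_000)
print(chain.mean(axis=0))
```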

    Accelerating Parallel Tempering: Quantile Tempering Algorithm (QuanTA)

    Using MCMC to sample from a target distribution, $\pi(x)$, on a $d$-dimensional state space can be a difficult and computationally expensive problem. In particular, when the target exhibits multimodality, traditional methods can fail to explore the entire state space, resulting in biased sample output. Methods to overcome this issue include the parallel tempering algorithm, which utilises an augmented state space approach to help the Markov chain traverse regions of low probability density and reach other modes. This method suffers from the curse of dimensionality, which dramatically slows the transfer of mixing information from the auxiliary targets to the target of interest as $d \rightarrow \infty$. This paper introduces a novel prototype algorithm, QuanTA, that uses a Gaussian-motivated transformation in an attempt to accelerate mixing through the temperature schedule of a parallel tempering algorithm. This new algorithm is accompanied by a comprehensive theoretical analysis quantifying the improved efficiency and scalability of the approach, concluding that under weak regularity conditions the new approach gives accelerated mixing through the temperature schedule. Empirical evidence of the effectiveness of this new algorithm is illustrated on canonical examples.
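    The sketch below shows a baseline parallel tempering scheme of the kind the paper seeks to accelerate: random-walk Metropolis moves at each inverse temperature followed by a proposed swap between a random adjacent pair. It does not implement QuanTA's Gaussian-motivated transformation; the target, temperature ladder, and tuning values are illustrative.

```python
import numpy as np

def parallel_tempering(log_target, betas, n_iter, scale=1.0, seed=0):
    """Generic parallel tempering: one random-walk Metropolis update per
    inverse temperature, then a proposed swap of a random adjacent pair.
    This is the baseline scheme only; QuanTA's transformation is not
    reproduced here."""
    rng = np.random.default_rng(seed)
    K = len(betas)
    xs = np.zeros(K)                      # one (1-d) state per temperature
    samples = np.empty(n_iter)
    for t in range(n_iter):
        # within-temperature moves
        for k in range(K):
            y = xs[k] + scale * rng.standard_normal()
            if np.log(rng.uniform()) < betas[k] * (log_target(y) - log_target(xs[k])):
                xs[k] = y
        # swap move between a random adjacent pair of temperatures
        k = rng.integers(K - 1)
        log_alpha = (betas[k] - betas[k + 1]) * (log_target(xs[k + 1]) - log_target(xs[k]))
        if np.log(rng.uniform()) < log_alpha:
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        samples[t] = xs[0]                # chain at beta = 1 targets pi
    return samples

# Illustrative bimodal target: mixture of normals centred at -5 and +5.
log_pi = lambda x: np.logaddexp(-0.5 * (x - 5.0) ** 2, -0.5 * (x + 5.0) ** 2)
out = parallel_tempering(log_pi, betas=[1.0, 0.3, 0.1], n_iter=20_000)
print(out.mean(), out.std())
```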

    Minimising MCMC variance via diffusion limits, with an application to simulated tempering

    We derive new results comparing the asymptotic variance of diffusions by writing them as appropriate limits of discrete-time birth-death chains which themselves satisfy Peskun orderings. We then apply our results to simulated tempering algorithms to establish which choice of inverse temperatures minimises the asymptotic variance of all functionals and thus leads to the most efficient MCMC algorithm. Comment: Published at http://dx.doi.org/10.1214/12-AAP918 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
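    For context, a generic simulated tempering sampler is sketched below: a single chain on the augmented state $(x, k)$ targeting something proportional to $c_k \pi(x)^{\beta_k}$, where the inverse temperatures $\beta_k$ are exactly the design choice the paper studies. The ladder and pseudo-prior constants shown are illustrative, not the paper's recommendation.

```python
import numpy as np

def simulated_tempering(log_target, betas, log_c, n_iter, scale=1.0, seed=0):
    """Generic simulated tempering on the augmented state (x, k):
    the target is proportional to c_k * pi(x)^{beta_k}.  The choice of
    inverse temperatures `betas` is the design question the paper
    addresses; the values used below are purely illustrative."""
    rng = np.random.default_rng(seed)
    x, k = 0.0, 0
    samples = []
    for _ in range(n_iter):
        # update x at the current inverse temperature
        y = x + scale * rng.standard_normal()
        if np.log(rng.uniform()) < betas[k] * (log_target(y) - log_target(x)):
            x = y
        # propose a move to a neighbouring temperature
        k_new = k + rng.choice([-1, 1])
        if 0 <= k_new < len(betas):
            log_alpha = (log_c[k_new] - log_c[k]
                         + (betas[k_new] - betas[k]) * log_target(x))
            if np.log(rng.uniform()) < log_alpha:
                k = k_new
        if k == 0:                 # keep only draws made at beta = 1
            samples.append(x)
    return np.array(samples)

log_pi = lambda x: -0.5 * x**2
draws = simulated_tempering(log_pi, betas=[1.0, 0.5, 0.25],
                            log_c=[0.0, 0.0, 0.0], n_iter=20_000)
print(draws.mean(), draws.var())
```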

    Likelihood-based inference for correlated diffusions

    We address the problem of likelihood-based inference for correlated diffusion processes using Markov chain Monte Carlo (MCMC) techniques. Such a task presents two interesting problems. First, the construction of the MCMC scheme should ensure that the correlation coefficients are updated subject to the positive definiteness constraint on the diffusion matrix. Second, a diffusion may only be observed at a finite set of points, and the marginal likelihood for the parameters based on these observations is generally not available. We overcome the first issue by using a Cholesky factorisation of the diffusion matrix. To deal with the unavailability of the likelihood, we generalise the data augmentation framework of Roberts and Stramer (2001, Biometrika 88(3):603-621) to d-dimensional correlated diffusions, including multivariate stochastic volatility models. Our methodology is illustrated through simulation-based experiments and with daily EUR/USD and GBP/USD rates together with their implied volatilities.
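    The following minimal sketch illustrates the first point: a Cholesky-factor parameterisation that maps an unconstrained parameter vector (as might be updated by an MCMC move) to a valid positive definite diffusion matrix. The dimension and parameter values are illustrative, not from the paper.

```python
import numpy as np

def unconstrained_to_cov(theta, d):
    """Map an unconstrained parameter vector to a symmetric positive definite
    d x d matrix via its Cholesky factor: log-parameterised diagonal entries,
    free lower-triangular off-diagonal entries.  Illustrative only."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.exp(np.diag(L))   # force a positive diagonal
    return L @ L.T                               # always positive definite

# Example: a 3-dimensional diffusion matrix from 6 unconstrained parameters.
theta = np.array([0.1, 0.3, -0.2, 0.4, 0.5, -0.1])
Sigma = unconstrained_to_cov(theta, 3)
print(np.linalg.eigvalsh(Sigma))                 # all eigenvalues positive
```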

    Adaptive Gibbs samplers and related MCMC methods

    We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run by learning as they go in an attempt to optimize the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions. Comment: Published at http://dx.doi.org/10.1214/11-AAP806 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1001.279
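    As a rough illustration of the kind of sampler considered, here is a generic adaptive random-scan Metropolis-within-Gibbs sketch in which the coordinate-selection probabilities are adapted during the run. The adaptation rule is purely illustrative (and naive); as the paper warns, such schemes require care to remain convergent.

```python
import numpy as np

def adaptive_random_scan(log_target, d, n_iter, scale=1.0, seed=0):
    """Generic adaptive random-scan Metropolis-within-Gibbs: at each
    iteration a coordinate is chosen with the current selection
    probabilities, updated by a 1-d random-walk Metropolis step, and the
    selection probabilities are then adapted using the empirical
    acceptance rates.  The adaptation rule is illustrative only."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    probs = np.full(d, 1.0 / d)           # coordinate-selection probabilities
    accepted = np.ones(d)
    attempted = np.ones(d)
    chain = np.empty((n_iter, d))
    for t in range(n_iter):
        i = rng.choice(d, p=probs)
        prop = x.copy()
        prop[i] += scale * rng.standard_normal()
        attempted[i] += 1
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            accepted[i] += 1
        # adapt: favour coordinates with lower acceptance rates, while
        # keeping every selection probability bounded away from zero
        weights = 1.0 - 0.5 * (accepted / attempted)
        probs = 0.9 * weights / weights.sum() + 0.1 / d
        chain[t] = x
    return chain

log_pi = lambda x: -0.5 * np.sum(x**2)
chain = adaptive_random_scan(log_pi, d=3, n_iter=10_000)
print(chain.mean(axis=0))
```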

    An approximation scheme for quasi-stationary distributions of killed diffusions

    In this paper we study the asymptotic behavior of the normalized weighted empirical occupation measures of a diffusion process on a compact manifold which is killed at a smooth rate and then regenerated at a random location, distributed according to the weighted empirical occupation measure. We show that the weighted occupation measures almost surely comprise an asymptotic pseudo-trajectory for a certain deterministic measure-valued semiflow, after suitably rescaling the time, and that with probability one they converge to the quasi-stationary distribution of the killed diffusion. These results provide theoretical justification for a scalable quasi-stationary Monte Carlo method for sampling from Bayesian posterior distributions. Comment: v2: revised version, 29 pages, 1 figure
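    A minimal discrete-time sketch of the mechanism described above: an Euler-Maruyama diffusion that is killed at a state-dependent rate and, on killing, regenerated from a point drawn from its past trajectory (here the unweighted empirical occupation measure). The drift, the killing rate, and the absence of reweighting are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def killed_diffusion_occupation(n_steps, dt=0.01, seed=0):
    """Sketch of the regeneration mechanism described in the abstract:
    simulate a (1-d) diffusion with Euler-Maruyama steps, kill it at rate
    kappa(x), and on killing restart from a point drawn uniformly from the
    past trajectory (the unweighted empirical occupation measure)."""
    rng = np.random.default_rng(seed)
    drift = lambda x: -x                 # OU-type drift, illustrative
    kappa = lambda x: 0.5 * x**2         # smooth killing rate, illustrative
    x = 0.0
    history = [x]
    for _ in range(n_steps):
        # kill with probability approximately kappa(x) * dt, then regenerate
        if rng.uniform() < kappa(x) * dt:
            x = history[rng.integers(len(history))]
        else:
            x = x + drift(x) * dt + np.sqrt(dt) * rng.standard_normal()
        history.append(x)
    return np.array(history)

occ = killed_diffusion_occupation(200_000)
print(occ.mean(), occ.var())    # empirical occupation-measure summaries
```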